Despite rapid progress in research on the adversarial robustness of deep neural networks (DNNs), there is little principled work for the time-series domain. Since time-series data arise in diverse applications including mobile health, finance, and smart grids, it is important to verify and improve the robustness of DNNs for the time-series domain. In this paper, we propose a novel framework for the time-series domain referred to as Dynamic Time Warping for Adversarial Robustness (DTW-AR), which employs the dynamic time warping measure. Theoretical and empirical evidence is provided to demonstrate the effectiveness of DTW over the standard Euclidean distance metric employed in prior methods for the image domain. We develop a principled algorithm, justified by theoretical analysis, to efficiently create diverse adversarial examples using random alignment paths. Experiments on diverse real-world benchmarks show the effectiveness of DTW-AR in fooling DNNs on time-series data and in improving their robustness through adversarial training. The source code of the DTW-AR algorithms is available at https://github.com/tahabelkhouja/dtw-ar
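For concreteness, the dynamic time warping measure that DTW-AR builds on can be computed with the classic dynamic program below. This is a generic 1-D sketch with a squared point-wise cost (an assumption; the paper's local cost and multivariate handling may differ), not the DTW-AR attack itself.

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic-programming DTW between two 1-D series.

    Returns the cost of the optimal alignment path, using squared
    point-wise differences as the local cost.
    """
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]
```

Unlike the Euclidean distance, this measure tolerates temporal shifts: two series that differ only by a small warp of the time axis can still have zero or near-zero DTW cost.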
Time-series data arise in many real-world applications (e.g., mobile health), and deep neural networks (DNNs) have shown great success in solving them. Despite this success, little is known about their robustness to adversarial attacks. In this paper, we propose a novel adversarial framework referred to as Time-Series Attacks via STATistical features (TSA-STAT). To address the unique challenges of the time-series domain, TSA-STAT imposes constraints on the statistical features of the time-series data to construct adversarial examples. Optimized polynomial transformations are used to create attacks that are more effective (in terms of successfully fooling DNNs) than those based on additive perturbations. We also provide certified bounds on the statistical features for the constructed adversarial examples. Our experiments on diverse real-world benchmark datasets show the effectiveness of TSA-STAT in fooling DNNs on time-series data and in improving their robustness. The source code of the TSA-STAT algorithms is available at https://github.com/tahabelkhouja/Time-Series-Attacks-via-STATistical-Features
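A minimal sketch of the two ingredients the abstract names, an element-wise polynomial transformation and a statistical-feature constraint, might look as follows. The coefficient optimization is omitted, and `within_stat_bound` is a simplified stand-in that checks only mean and standard deviation; the paper's actual feature set is richer.

```python
import numpy as np

def polynomial_transform(x, coeffs):
    """Apply an element-wise polynomial a0 + a1*x + ... + ad*x^d to a
    time series. In TSA-STAT the coefficients are optimized to fool a
    classifier; here they are simply given."""
    return sum(a * x ** k for k, a in enumerate(coeffs))

def within_stat_bound(x, x_adv, eps):
    """Illustrative statistical-feature constraint: the mean and the
    standard deviation of the transformed series must stay within eps
    of the original's."""
    return (abs(x.mean() - x_adv.mean()) <= eps
            and abs(x.std() - x_adv.std()) <= eps)
```

Because the attack is a transformation rather than an additive perturbation, a single set of coefficients can be applied to many inputs while the statistical constraint keeps each result close to its source.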
The safe deployment of time-series classifiers in real-world applications relies on the ability to detect data that are not generated from the same distribution as the training data. This task is referred to as out-of-distribution (OOD) detection. We consider the novel problem of OOD detection for the time-series domain. We discuss the unique challenges posed by time-series data and explain why prior methods from the image domain perform poorly. Motivated by these challenges, this paper proposes a novel Seasonal Ratio Scoring (SRS) approach. SRS consists of three key algorithmic steps. First, each input is decomposed into a class-wise semantic component and a remainder. Second, this decomposition is used to estimate the class-wise conditional likelihoods of the input and the remainder using deep generative models; the seasonal ratio score is computed from these estimates. Third, a threshold interval is identified from the in-distribution data to detect OOD examples. Experiments on diverse real-world benchmarks demonstrate that the SRS method is well suited for time-series OOD detection compared to baseline methods. Open-source code for the SRS method is available at https://github.com/tahabelkhouja/srs
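The scoring and thresholding steps can be illustrated schematically: given the two estimated class-conditional log-likelihoods, the score is their ratio, and an example is flagged as OOD when the score leaves a threshold interval fitted on in-distribution data. This is a sketch of the score's general shape, not the paper's exact definition.

```python
import math

def seasonal_ratio_score(log_p_input, log_p_remainder):
    """Ratio of the two estimated class-conditional likelihoods,
    computed in log space for numerical stability."""
    return math.exp(log_p_input - log_p_remainder)

def is_ood(score, lo, hi):
    """Flag an example as OOD when its score falls outside the
    threshold interval [lo, hi] estimated from in-distribution data."""
    return not (lo <= score <= hi)
```

Intuitively, in-distribution inputs should yield scores concentrated in a narrow band, so both unusually small and unusually large ratios are treated as OOD.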
Despite the success of deep neural networks (DNNs) on time-series data (e.g., mobile health), little is known about how to train robust DNNs for the time-series domain, owing to its unique characteristics compared to image and text data. In this paper, we propose a novel algorithmic framework referred to as RObust Training for Time-Series (RO-TS) to create robust DNNs for time-series classification tasks. Specifically, we formulate a min-max optimization problem over the model parameters by explicitly reasoning about a robustness criterion for time-series inputs measured by the global alignment kernel (GAK) based distance. We also show the generality and advantages of our formulation using the summation structure over time-series alignments shared by GAK and dynamic time warping (DTW). This problem is an instance of a family of compositional min-max optimization problems, which are challenging and lack clear theoretical guarantees. We propose a principled stochastic compositional alternating gradient descent ascent (SCAGDA) algorithm for this family of optimization problems. Unlike traditional methods for time series that require approximate computation of distance measures, SCAGDA approximates the GAK-based distance on the fly using moving averages. We theoretically analyze the convergence rate of SCAGDA and provide strong theoretical support for the estimation of the GAK-based distance. Our experiments on real-world benchmarks demonstrate that RO-TS creates more robust DNNs than adversarial training methods that rely on data augmentation or new definitions of loss functions. We also demonstrate the importance of GAK over the Euclidean distance for time-series data. The source code of the RO-TS algorithms is available at https://github.com/tahabelkhouja/robust-training-for-time-series
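The summation-over-alignments structure that GAK shares with DTW admits a simple dynamic program. Below is a minimal NumPy sketch of an unnormalized global alignment kernel for 1-D series with a Gaussian local similarity; it illustrates the recursion only, not the paper's implementation, its normalization, or SCAGDA's moving-average estimates.

```python
import numpy as np

def gak(x, y, sigma=1.0):
    """Unnormalized global alignment kernel between two 1-D series.

    Instead of taking the single best alignment (as DTW does), the
    recursion sums a Gaussian local similarity over all alignment
    paths.
    """
    n, m = len(x), len(y)
    K = np.zeros((n + 1, m + 1))
    K[0, 0] = 1.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            local = np.exp(-((x[i - 1] - y[j - 1]) ** 2) / (2 * sigma ** 2))
            K[i, j] = local * (K[i - 1, j] + K[i, j - 1] + K[i - 1, j - 1])
    return K[n, m]
```

Summing over all paths makes the kernel smooth in its inputs, which is one reason an alignment-based distance built from it is amenable to gradient-based min-max training.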
Graph Neural Networks (GNNs) have been taking on a role in many areas, thanks to their expressive power on graph-structured data. On the other hand, Mobile Ad-Hoc Networks (MANETs) are gaining attention as network technologies advance to the 5G level. However, there is no study that evaluates the efficiency of GNNs on MANETs. In this study, we aim to fill this gap by implementing a MANET dataset in a popular GNN framework, PyTorch Geometric, and showing how GNNs can be utilized to analyze the traffic of MANETs. We run an edge prediction task on the dataset with a GraphSAGE (SAG) model, where the model tries to predict whether there is a link between two nodes. We use several evaluation metrics to measure the performance and efficiency of GNNs on MANETs. The SAG model achieved 82.1% accuracy on average in the experiments.
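The edge-prediction setup can be sketched without a deep learning framework: one GraphSAGE-style mean-aggregation layer plus a dot-product link score. The study itself uses PyTorch Geometric (e.g., its `SAGEConv` layer); the plain-NumPy version below only illustrates the same aggregation idea.

```python
import numpy as np

def sage_layer(h, adj, w_self, w_neigh):
    """One GraphSAGE-style layer with mean aggregation.

    h: (num_nodes, dim) node features; adj: (num_nodes, num_nodes)
    adjacency matrix. Combines each node's own features with the mean
    of its neighbors' features, then applies a ReLU.
    """
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                      # avoid division by zero
    neigh = (adj @ h) / deg                  # mean of neighbor features
    return np.maximum(h @ w_self + neigh @ w_neigh, 0.0)

def link_score(h, u, v):
    """Dot-product score between two node embeddings, a standard way
    to predict whether an edge (u, v) exists."""
    return float(h[u] @ h[v])
```

In practice the score is passed through a sigmoid and thresholded, and the model is trained against negative (non-edge) samples.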
Over-approximating the reachable sets of dynamical systems is a fundamental problem in safety verification and robust control synthesis. The representation of these sets is a key factor that affects the computational complexity and the approximation error. In this paper, we develop a new approach for over-approximating the reachable sets of neural network dynamical systems using adaptive template polytopes. We use the singular value decomposition of linear layers along with the shape of the activation functions to adapt the geometry of the polytopes at each time step to the geometry of the true reachable sets. We then propose a branch-and-bound method to compute accurate over-approximations of the reachable sets by the inferred templates. We illustrate the utility of the proposed approach in the reachability analysis of linear systems driven by neural network controllers.
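Two of the ingredients can be sketched in isolation: template normals taken from the singular vectors of a layer's weight matrix, and a sound halfspace offset computed as the support function of a box over-approximation. Both are simplified stand-ins for the paper's construction, which also accounts for activation shapes and uses branch and bound to tighten the offsets.

```python
import numpy as np

def template_directions(w):
    """Use the left singular vectors of a linear layer's weight matrix
    (and their negations) as template normals -- a rough sketch of
    adapting the polytope geometry to the layer."""
    u, _, _ = np.linalg.svd(w)
    return np.vstack([u.T, -u.T])

def support_over_box(d, lo, hi):
    """Exact support function max_{x in [lo, hi]} <d, x> of an
    axis-aligned box: pick hi where d > 0, lo elsewhere. Given a box
    over-approximation of the reachable set, this yields a sound
    offset for the template halfspace {x : <d, x> <= b}."""
    return float(np.where(d > 0, hi, lo) @ d)
```

Collecting one offset per direction defines the template polytope; because each offset upper-bounds the true support function, the polytope contains the reachable set.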
Recently the focus of the computer vision community has shifted from expensive supervised learning towards self-supervised learning of visual representations. While the performance gap between supervised and self-supervised methods has been narrowing, the time for training self-supervised deep networks remains an order of magnitude larger than for their supervised counterparts, which hinders progress, imposes carbon cost, and limits societal benefits to institutions with substantial resources. Motivated by these issues, this paper investigates reducing the training time of recent self-supervised methods by various model-agnostic strategies that have not been used for this problem. In particular, we study three strategies: an extendable cyclic learning rate schedule, a matching progressive augmentation-magnitude and image-resolution schedule, and a hard positive mining strategy based on augmentation difficulty. We show that all three methods combined lead to up to a 2.7-fold speed-up in the training time of several self-supervised methods while retaining comparable performance to the standard self-supervised learning setting.
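Of the three strategies, the cyclic learning-rate schedule is the easiest to make concrete. Below is a cosine schedule that restarts every fixed number of steps, so a run can be extended by whole cycles without re-tuning; this is a plausible reading of "extendable cyclic", and the paper's exact schedule may differ.

```python
import math

def cyclic_lr(step, cycle_len, base_lr, min_lr=0.0):
    """Cosine learning-rate schedule that restarts every cycle_len
    steps: decays from base_lr to min_lr within a cycle, then resets.
    Training can be extended by appending whole cycles."""
    t = (step % cycle_len) / cycle_len
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * t))
```

Each restart begins at `base_lr`, so a checkpoint taken at a cycle boundary can resume training under exactly the same schedule.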
Channel charting (CC) is an unsupervised learning method that locates users relative to each other without reference positions. From a broader perspective, it can be viewed as a way to discover a low-dimensional latent space charting the channel manifold. In this paper, this latent modeling view is leveraged together with a recently proposed location-based beamforming (LBB) method to show that channel charting can be used for mapping channels in space or frequency. Combining CC and LBB yields a neural network resembling an autoencoder. The proposed method is empirically assessed on a channel mapping task whose objective is to predict downlink channels from uplink channels.
Adversarial training has been empirically shown to be more prone to overfitting than standard training. The exact underlying reasons are not yet fully understood. In this paper, we identify one cause of overfitting related to current practices of generating adversarial samples from misclassified samples. To address this, we propose an alternative approach that leverages the misclassified samples to mitigate the overfitting problem. We show that our approach achieves better generalization while maintaining comparable robustness to state-of-the-art adversarial training methods on a wide range of computer vision, natural language processing, and tabular tasks.
We study the problem of profiling news media on the Web with respect to their factuality of reporting and bias. This is an important but under-studied problem related to disinformation and "fake news" detection, but it addresses the issue at a coarser granularity than looking at an individual article or an individual claim. This is useful as it allows profiling entire media outlets in advance. Unlike previous work, which has focused primarily on text (e.g., on the text of the articles published by the target website, or on the textual description in their social media profiles or in Wikipedia), here our main focus is on modeling the similarity between media outlets based on the overlap of their audiences. This is motivated by homophily considerations, i.e., the tendency of people to connect to others with similar interests, which we extend to media, hypothesizing that similar types of media would be read by similar kinds of users. In particular, we propose GREENER (GRaph nEural nEtwork for News mEdia pRofiling), a model that builds a graph of inter-media connections based on their audience overlap, and then uses graph neural networks to represent each medium. We find that such representations are quite useful for predicting the factuality and the bias of news media outlets, yielding improvements over state-of-the-art results reported on two datasets. When augmented with conventionally used representations obtained from news articles, Twitter, YouTube, Facebook, and Wikipedia, prediction accuracy is found to improve by 2.5-27 macro-F1 points on the two tasks.
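The inter-media graph at the core of GREENER can be sketched with a Jaccard overlap between audience sets, connecting two outlets when their audiences overlap sufficiently. The paper's actual overlap measure, audience data source, and thresholding may differ; this is only an illustration of the construction.

```python
def audience_overlap_graph(audiences, threshold=0.1):
    """Build inter-media edges from audience overlap.

    audiences: dict mapping a medium name to a set of user ids.
    Two media are connected when the Jaccard overlap of their
    audiences reaches the threshold.
    """
    media = list(audiences)
    edges = []
    for i, a in enumerate(media):
        for b in media[i + 1:]:
            inter = len(audiences[a] & audiences[b])
            union = len(audiences[a] | audiences[b])
            if union and inter / union >= threshold:
                edges.append((a, b))
    return edges
```

The resulting graph is then what a GNN operates on: each medium becomes a node whose representation is refined by aggregating information from outlets with overlapping audiences.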